The robotics community has seen an exponential growth in the sophistication of the theoretical tools available for modeling soft robotic devices. Different solutions have been proposed to overcome the difficulties of soft robot modeling, often drawing on other scientific disciplines such as continuum mechanics and computer graphics. These theoretical foundations are often taken for granted, which leads to a complex literature that, as a consequence, has never been the subject of a complete review. Within this scenario, the objective of the present paper is twofold: to highlight the common theoretical roots of the different families of modeling techniques, adopting a unified language that simplifies the analysis of their main connections and differences; and, with the classification of approaches then following naturally, to ultimately provide a complete, untangled review of the major works in the field.
In water quality management processes, identifying and interpreting relationships between features, such as location and weather variable tuples, and water quality variables, such as levels of bacteria, is key to gaining insights and identifying areas where interventions should be made. There is a need for a search process to identify the locations and types of phenomena that are influencing water quality, and a need to explain why the quality is being affected and which factors are most relevant. This paper addresses both of these issues by developing a process for collecting data for features that represent a variety of variables over a spatial region, using those features for training and inference, and analysing their performance using the model and Shapley values. Shapley values originated in cooperative game theory and can be used to aid in the interpretation of machine learning results. Evaluations are performed using several machine learning algorithms and water quality data from the Dublin Grand Canal basin.
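As a minimal sketch of the game-theoretic idea behind this kind of attribution: a feature's Shapley value is its marginal contribution to the model output, averaged over all orderings of the features. The toy value function below, with a location-rainfall interaction term, is invented for illustration and is not the paper's model.

```python
import itertools
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the coalition."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in itertools.permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    for p in players:
        phi[p] /= factorial(n)
    return phi

# Toy "model output" over feature subsets: location and rainfall
# contribute individually and through an interaction term.
def v(S):
    score = 0.0
    if "location" in S:
        score += 2.0
    if "rainfall" in S:
        score += 1.0
    if "location" in S and "rainfall" in S:
        score += 0.5  # interaction, split between the two by symmetry
    return score

phi = shapley_values(["location", "rainfall", "temperature"], v)
print(phi)
```

Note the efficiency property: the attributions sum exactly to the value of the full feature set, which is what makes Shapley values attractive for explaining a prediction.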
This white paper lays out a vision of research and development in the field of artificial intelligence for the next decade (and beyond). Its denouement is a cyber-physical ecosystem of natural and synthetic sense-making, in which humans are integral participants$\unicode{x2014}$what we call ''shared intelligence''. This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. In this context, we understand intelligence as the capacity to accumulate evidence for a generative model of one's sensed world$\unicode{x2014}$also known as self-evidencing. Formally, this corresponds to maximizing (Bayesian) model evidence, via belief updating over several scales: i.e., inference, learning, and model selection. Operationally, this self-evidencing can be realized via (variational) message passing or belief propagation on a factor graph. Crucially, active inference foregrounds an existential imperative of intelligent systems; namely, curiosity or the resolution of uncertainty. This same imperative underwrites belief sharing in ensembles of agents, in which certain aspects (i.e., factors) of each agent's generative world model provide a common ground or frame of reference. Active inference plays a foundational role in this ecology of belief sharing$\unicode{x2014}$leading to a formal account of collective intelligence that rests on shared narratives and goals. We also consider the kinds of communication protocols that must be developed to enable such an ecosystem of intelligences and motivate the development of a shared hyper-spatial modeling language and transaction protocol, as a first$\unicode{x2014}$and key$\unicode{x2014}$step towards such an ecology.
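The claim that self-evidencing "corresponds to maximizing (Bayesian) model evidence via belief updating" can be made concrete with a toy two-state example. The states, likelihoods, and observation stream below are invented; this is exact Bayesian updating, not a variational message-passing scheme on a factor graph.

```python
import math

# A tiny generative model: two hidden states and a binary observation.
prior = {"rainy": 0.5, "sunny": 0.5}   # P(state)
likelihood = {"rainy": 0.9, "sunny": 0.2}  # P(obs = "wet" | state)

def update(belief, obs):
    """One step of belief updating; returns the posterior and the
    log evidence (log probability of the observation under the model)."""
    joint = {s: belief[s] * (likelihood[s] if obs == "wet" else 1 - likelihood[s])
             for s in belief}
    evidence = sum(joint.values())      # P(obs)
    posterior = {s: joint[s] / evidence for s in joint}
    return posterior, math.log(evidence)

belief, log_evidence = dict(prior), 0.0
for obs in ["wet", "wet", "dry", "wet"]:
    belief, le = update(belief, obs)
    log_evidence += le                  # accumulated model evidence

print(belief, log_evidence)
```

The accumulated log evidence is the quantity a self-evidencing agent implicitly maximizes: a model whose beliefs predict the observations well accrues less surprise per observation.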
We introduce an unsupervised learning approach that combines the truncated singular value decomposition with convex clustering to estimate within-cluster directions of maximum variance/covariance (in the variables) while simultaneously hierarchically clustering (on observations). In contrast to previous work on joint clustering and embedding, our approach has a straightforward formulation, is readily scalable via distributed optimization, and admits a direct interpretation as hierarchically clustered principal component analysis (PCA) or hierarchically clustered canonical correlation analysis (CCA). Through numerical experiments and real-world examples relevant to precision medicine, we show that our approach outperforms traditional and contemporary clustering methods on underdetermined problems ($p \gg N$ with tens of observations) and scales to large datasets (e.g., $N=100,000$; $p=1,000$) while yielding interpretable dendrograms of hierarchical per-cluster principal components or canonical variates.
Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement. Their success hinges on the fact that the underlying physical phenomena are continuous. For inherently discrete and categorical data such as language, various diffusion-inspired alternatives have been proposed. However, the continuous nature of diffusion models conveys many benefits, and in this work we endeavour to preserve it. We propose CDCD, a framework for modelling categorical data with diffusion models that are continuous both in time and input space. We demonstrate its efficacy on several language modelling tasks.
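The premise, though not the CDCD framework itself, can be sketched in a few lines: place categorical tokens at fixed points of a continuous embedding space, where Gaussian (diffusion-style) corruption is well defined, and recover the token by classifying the noisy point. The vocabulary, embeddings, noise level, and nearest-neighbour "denoiser" below are all illustrative assumptions.

```python
import math, random

random.seed(0)

# Toy vocabulary with fixed 2-d embeddings: discrete tokens become
# points in a continuous space, so Gaussian noise applies directly.
embeddings = {
    "cat": (1.0, 0.0),
    "dog": (0.0, 1.0),
    "eel": (-1.0, 0.0),
}

def corrupt(token, sigma):
    """Forward process: embed the token, add isotropic Gaussian noise."""
    x, y = embeddings[token]
    return (x + random.gauss(0, sigma), y + random.gauss(0, sigma))

def decode(point):
    """Stand-in for a learned denoiser: classify the noisy point by
    its nearest clean embedding."""
    return min(embeddings, key=lambda t: math.dist(point, embeddings[t]))

# At a low noise level the token survives the round trip almost always.
hits = sum(decode(corrupt("dog", sigma=0.1)) == "dog" for _ in range(200))
print(hits)
```

In an actual continuous-diffusion model for discrete data, the hand-coded nearest-neighbour rule is replaced by a network trained across noise levels; the sketch only shows why the continuous formulation is coherent for categorical inputs.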
In many risk-aware and multi-objective reinforcement learning settings, the utility of the user is derived from a single execution of a policy. In these settings, making decisions based on the average future returns is not suitable. For example, in a medical setting a patient may only have one opportunity to treat their illness. Making decisions using just the expected future returns -- known in reinforcement learning as the value -- cannot account for the potential range of adverse or positive outcomes a decision may have. Therefore, we should use the distribution over expected future returns differently to represent the critical information that the agent requires at decision time by taking both the future and accrued returns into consideration. In this paper, we propose two novel Monte Carlo tree search algorithms. Firstly, we present a Monte Carlo tree search algorithm that can compute policies for nonlinear utility functions (NLU-MCTS) by optimising the utility of the different possible returns attainable from individual policy executions, resulting in good policies for both risk-aware and multi-objective settings. Secondly, we propose a distributional Monte Carlo tree search algorithm (DMCTS) which extends NLU-MCTS. DMCTS computes an approximate posterior distribution over the utility of the returns, and utilises Thompson sampling during planning to compute policies in risk-aware and multi-objective settings. Both algorithms outperform the state-of-the-art in multi-objective reinforcement learning for the expected utility of the returns.
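Setting the tree search aside, the selection idea behind DMCTS can be sketched in isolation: keep an approximate posterior over each action's return, Thompson-sample from it, and score samples with a nonlinear utility rather than the mean. The concave utility, bootstrap posterior, and two-action setting below are illustrative assumptions, not the paper's algorithm.

```python
import random

random.seed(1)

# Concave utility of a single-execution return: losses hurt twice as
# much as gains, so the best action is not the one with the best mean.
def utility(ret):
    return ret if ret >= 0 else 2.0 * ret

# Two actions with the same mean return (1.0) but different risk.
def execute(action):
    if action == "safe":
        return random.gauss(1.0, 0.1)
    return random.choice([5.0, -3.0])   # risky: wide spread of outcomes

def thompson_pick(observed):
    """Thompson-style selection: draw a bootstrap resample of each
    action's observed returns, score it by mean utility, and pick the
    action whose sampled utility is highest."""
    def sampled_utility(action):
        rets = observed[action]
        resample = [random.choice(rets) for _ in rets]
        return sum(utility(r) for r in resample) / len(resample)
    return max(observed, key=sampled_utility)

observed = {"safe": [], "risky": []}
for action in observed:                 # seed each action with 5 returns
    for _ in range(5):
        observed[action].append(execute(action))

picks = {"safe": 0, "risky": 0}
for _ in range(500):
    action = thompson_pick(observed)
    picks[action] += 1
    observed[action].append(execute(action))

print(picks)
```

A mean-maximizing agent would be indifferent between these two actions; scoring sampled returns through the utility makes the risk-averse preference emerge from the same data.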
When evaluating the performance of clinical machine learning models, one must consider the deployment population. When the population of patients with observed labels is only a subset of the deployment population (label selection), standard model performance estimates on the observed population may be misleading. In this study, we describe three classes of label selection and simulate five causally distinct scenarios to assess how specific selection mechanisms bias a suite of commonly reported binary machine learning model performance metrics. Simulations reveal that when selection is affected by observed features, naive estimates of model discrimination may be misleading. When selection is affected by labels, naive estimates of calibration fail to reflect reality. We borrow traditional weighting estimators from the causal inference literature and find that they recover full-population estimates when the selection probabilities are correctly specified. We then address the real-world task of monitoring the performance of a deployed machine learning model that interacts with clinicians and thereby influences the selection mechanism of the labels. We train three machine learning models to flag low-yield laboratory diagnostics, and simulate their intended consequence of reducing wasteful laboratory utilization. We find that naive estimates on the observed population underestimate performance by 20%. Such a disparity could be large enough to lead to the termination of a successful clinical decision support tool. We propose an altered deployment procedure that combines injected randomization with traditional weighted estimation, and find that it recovers true model performance.
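The weighting correction referred to here is standard inverse-probability weighting: each observed case is weighted by the reciprocal of its selection probability. The simulation below (feature, label, model, and selection probabilities all invented) shows a naive estimate drifting away from the full-population accuracy while the weighted estimate recovers it.

```python
import random

random.seed(7)

# Simulated label selection: every patient has a true label, but it is
# only observed with a probability that depends on a feature, so the
# observed population is a biased slice of the deployment population.
population = []
for _ in range(20000):
    x = random.random()                  # a feature in [0, 1)
    y = 1 if random.random() < x else 0  # label depends on the feature
    pred = 1 if x > 0.25 else 0          # a fixed model's prediction
    p_sel = 0.9 if x > 0.5 else 0.1      # selection depends on x only
    selected = random.random() < p_sel
    population.append((y, pred, p_sel, selected))

full_acc = sum(y == p for y, p, _, _ in population) / len(population)

obs = [(y, p, w) for y, p, w, s in population if s]
naive_acc = sum(y == p for y, p, _ in obs) / len(obs)

# Inverse-probability weighting: weight each observed case by
# 1 / P(selected) to recover the full-population estimate.
ipw_acc = (sum((y == p) / w for y, p, w in obs)
           / sum(1 / w for _, _, w in obs))

print(full_acc, naive_acc, ipw_acc)
```

The correction only works when the selection probabilities are correctly specified, which is exactly why the abstract's proposal injects randomization at deployment: it makes those probabilities known by construction.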
The iterative weight update of the AdaBoost machine learning algorithm can be realized as a dynamical map on the probability simplex. When learning low-dimensional data sets, the algorithm has a tendency toward cycling behavior, which is the subject of this paper. AdaBoost's cyclic behavior lends itself to direct computational methods that are unavailable in the algorithm's general, aperiodic regime. From these computational properties, we present a concrete correspondence between AdaBoost's cycling behavior and continued fraction dynamics. We then explore the consequences of this correspondence to understand how the algorithm falls into this periodic state. We intend this work to serve as a novel and self-contained exposition of the cyclic dynamics of this machine learning algorithm.
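The simplex map in question can be written down directly. The configuration below, three training points and three weak learners, each misclassifying exactly one point, is a standard toy setting for observing AdaBoost cycles (an assumption here, not necessarily the paper's example): each round the lowest-weighted-error learner is chosen and the weights are renormalized so that misclassified and correctly classified mass each become 1/2.

```python
# Three training points, three weak learners; learner j misclassifies
# only point j. The weight vector w lives on the probability simplex.
def adaboost_step(w):
    """One round of (optimal) AdaBoost: pick the weak learner with the
    lowest weighted error, then reweight so the misclassified mass and
    the correctly classified mass each become 1/2."""
    j = min(range(3), key=lambda i: w[i])      # learner j errs on point j
    err = w[j]
    new = [0.0, 0.0, 0.0]
    new[j] = 0.5                               # misclassified mass -> 1/2
    for i in range(3):
        if i != j:
            new[i] = w[i] * 0.5 / (1.0 - err)  # correct mass -> 1/2
    return new

w = [1 / 3, 1 / 3, 1 / 3]
history = []
for _ in range(60):
    w = adaboost_step(w)
    history.append(tuple(w))

# After a transient, the trajectory settles into a period-3 cycle.
gap = max(abs(a - b) for a, b in zip(history[-1], history[-4]))
print(history[-1], gap)
```

In this run the limiting cycle's smallest weight approaches (3 - sqrt(5))/4, a golden-ratio-related value, which gives a first hint of the continued-fraction connection the abstract describes.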
Removing the influence of a specified subset of training data from a machine learning model may be required to address issues such as privacy, fairness, and data quality. Retraining the model from scratch on the remaining data after removal of the subset is effective but often infeasible due to its computational expense. Therefore, the past few years have seen several novel approaches for efficient removal, forming the field of "machine unlearning"; however, many aspects of the literature published thus far are disparate and lack consensus. In this paper, we summarize and compare seven state-of-the-art machine unlearning algorithms, consolidate the definitions of core concepts used in the field, reconcile different approaches to evaluating algorithms, and discuss issues related to applying machine unlearning in practice.
High-order interaction events are common in real-world applications. Learning embeddings that encode the complex relationships of the participants from these events is of crucial importance in knowledge mining and predictive tasks. Despite the success of existing approaches, such as Poisson tensor factorization, they ignore the sparse structure underlying the data, namely that the interactions which occur are far fewer than the possible interactions among all the participants. In this paper, we propose Nonparametric Embeddings of Sparse High-order interaction events (NESH). We hybridize a sparse hypergraph (tensor) process and a matrix Gaussian process to capture the asymptotic structural sparsity within the interactions and the nonlinear temporal relationships between the participants. We prove strong asymptotic bounds on the sparsity ratio (including both a lower and an upper bound), which reveal the asymptotic properties of the sampled structure. We develop an efficient, scalable model inference algorithm using batch normalization, the stick-breaking construction, and sparse variational GP approximations. We demonstrate the advantage of our approach in several real-world applications.